Recent advances in generative adversarial networks (GANs) have demonstrated the capability of generating stunning photo-realistic portrait images. While some prior works have applied such image GANs to unconditional 2D portrait video generation and static 3D portrait synthesis, few works have successfully extended GANs to generate 3D-aware portrait videos. In this work, we propose PV3D, the first generative framework that can synthesize multi-view consistent portrait videos. Specifically, our method extends the recent static 3D-aware image GAN to the video domain by generalizing the 3D implicit neural representation to model the spatio-temporal space. To introduce motion dynamics into the generation process, we develop a motion generator that stacks multiple motion layers to generate motion features via modulated convolution. To alleviate motion ambiguities caused by camera/human motion, we propose a simple yet effective camera condition strategy for PV3D, enabling both temporally and multi-view consistent video generation. Moreover, PV3D introduces two discriminators that regularize the spatial and temporal domains to ensure the plausibility of the generated portrait videos. These designs enable PV3D to generate 3D-aware, motion-plausible portrait videos with high-quality appearance and geometry, significantly outperforming prior works. As a result, PV3D is able to support many downstream applications such as animating static portraits and view-consistent video motion editing. Code and models will be released at https://showlab.github.io/pv3d.
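As an illustration only (not the authors' released code), the sketch below shows how a motion layer driven by modulated convolution might look in PyTorch, in the StyleGAN2 style the abstract alludes to; the class name, shapes, and the `motion_code` input are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MotionLayer(nn.Module):
    """Hypothetical modulated-convolution motion layer: a motion code scales
    the conv weights per input channel, followed by demodulation."""
    def __init__(self, in_ch, out_ch, motion_dim, k=3):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_ch, in_ch, k, k) * 0.01)
        self.affine = nn.Linear(motion_dim, in_ch)  # motion code -> per-channel scales

    def forward(self, x, motion_code):
        b, c, h, w = x.shape
        s = self.affine(motion_code).view(b, 1, c, 1, 1) + 1.0   # modulation scales
        w_mod = self.weight.unsqueeze(0) * s                      # (b, out, in, k, k)
        demod = torch.rsqrt((w_mod ** 2).sum(dim=(2, 3, 4), keepdim=True) + 1e-8)
        w_mod = w_mod * demod                                     # demodulation
        # grouped conv: fold the batch into groups so each sample uses its own weights
        x = x.view(1, b * c, h, w)
        w_mod = w_mod.view(b * self.weight.shape[0], c, *self.weight.shape[2:])
        out = F.conv2d(x, w_mod, padding=self.weight.shape[-1] // 2, groups=b)
        return out.view(b, -1, h, w)

# usage: MotionLayer(64, 64, 512)(torch.randn(2, 64, 32, 32), torch.randn(2, 512))
```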
Open-vocabulary scene understanding aims to localize and recognize unseen categories beyond the annotated label space. The recent breakthrough of 2D open-vocabulary perception is largely driven by Internet-scale paired image-text data with rich vocabulary concepts. However, this success cannot be directly transferred to 3D scenarios due to the inaccessibility of large-scale 3D-text pairs. To this end, we propose to distill knowledge encoded in pre-trained vision-language (VL) foundation models through captioning multi-view images from 3D, which allows explicitly associating 3D and semantic-rich captions. Further, to facilitate coarse-to-fine visual-semantic representation learning from captions, we design hierarchical 3D-caption pairs, leveraging geometric constraints between 3D scenes and multi-view images. Finally, by employing contrastive learning, the model learns language-aware embeddings that connect 3D and text for open-vocabulary tasks. Our method not only remarkably outperforms baseline methods by 25.8% $\sim$ 44.7% hIoU and 14.5% $\sim$ 50.4% hAP$_{50}$ on open-vocabulary semantic and instance segmentation, but also shows robust transferability on challenging zero-shot domain transfer tasks. Code will be available at https://github.com/CVMI-Lab/PLA.
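For intuition on the final contrastive step, here is a minimal sketch of an InfoNCE-style loss aligning pooled 3D features with caption embeddings; the function name, shapes, and the assumption that row i of each side forms a matched pair are illustrative, not the released PLA code.

```python
import torch
import torch.nn.functional as F

def point_caption_contrastive_loss(point_feats, caption_feats, temperature=0.07):
    """Symmetric InfoNCE loss between 3D features and caption embeddings,
    both assumed to be (N, D) with matched rows."""
    p = F.normalize(point_feats, dim=-1)
    c = F.normalize(caption_feats, dim=-1)
    logits = p @ c.t() / temperature                       # (N, N) similarity matrix
    targets = torch.arange(p.size(0), device=p.device)
    # cross-entropy in both directions: 3D -> text and text -> 3D
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))

# usage: loss = point_caption_contrastive_loss(torch.randn(8, 512), torch.randn(8, 512))
```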
Scene text recognition has attracted increasing interest in recent years due to its wide range of applications in multilingual translation, autonomous driving, and beyond. In this report, we describe our solution to the Out-Of-Vocabulary Scene Text Understanding (OOV-ST) challenge, which aims to extract out-of-vocabulary (OOV) words from natural scene images. Our oCLIP-based model achieves 28.59% in h-mean, ranking first on the end-to-end OOV word recognition track of the OOV challenge at the ECCV 2022 TiE Workshop.
This report presents our winning solution to the ECCV 2022 challenge on Out-Of-Vocabulary Scene Text Understanding (OOV-ST): Cropped Word Recognition. The challenge was held in the context of the ECCV 2022 workshop on Text in Everything (TiE) and aims to extract out-of-vocabulary words from natural scene images. In the competition, we first pre-train on synthetic datasets and then fine-tune the model on the challenge training set with data augmentation. Meanwhile, two additional models are trained specifically for long and vertical texts. Finally, we ensemble the outputs of models with different layers, different backbones, and different seeds, as sketched below. Our solution achieves an overall word accuracy of 69.73% when both in-vocabulary and out-of-vocabulary words are considered.
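As a rough illustration of the ensembling step only (not the competition code), the snippet below combines per-image word predictions from several recognizers by confidence-weighted voting; the input format is an assumption.

```python
from collections import defaultdict

def ensemble_word_predictions(predictions, confidences=None):
    """Combine word predictions from several recognizers (e.g. different
    backbones or seeds) by summing per-model confidences for each candidate string."""
    if confidences is None:
        confidences = [1.0] * len(predictions)       # plain majority vote
    scores = defaultdict(float)
    for word, conf in zip(predictions, confidences):
        scores[word] += conf
    return max(scores.items(), key=lambda kv: kv[1])[0]

# usage: ensemble_word_predictions(["street", "strcet", "street"], [0.9, 0.4, 0.8])  # -> "street"
```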
Neural Radiance Fields (NeRF) have demonstrated very impressive performance in novel view synthesis by implicitly modelling 3D representations from multi-view 2D images. However, most existing studies train NeRF models with either reasonable camera pose initializations or hand-crafted camera pose distributions, which are often unavailable or hard to acquire in various real-world scenarios. We design VMRF, an innovative view-matching NeRF that enables effective NeRF training without requiring prior knowledge of camera poses or camera pose distributions. VMRF introduces a view-matching scheme that exploits unbalanced optimal transport to produce a feature transport plan mapping a rendered image with a randomly initialized camera pose to the corresponding real image. With the feature transport plan as guidance, a novel pose calibration technique is designed to rectify the initially randomized camera poses by predicting the relative pose transformation between pairs of rendered and real images. Extensive experiments over a number of synthetic datasets show that the proposed VMRF outperforms the state-of-the-art both qualitatively and quantitatively by large margins.
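For readers unfamiliar with unbalanced optimal transport, here is a minimal Sinkhorn-style sketch of how a feature transport plan between rendered-image and real-image features could be computed; the function, hyper-parameters, and shapes are assumptions, not the paper's implementation.

```python
import torch

def unbalanced_sinkhorn(a, b, cost, eps=0.05, rho=1.0, n_iters=100):
    """Entropic unbalanced OT via Sinkhorn scaling with KL-relaxed marginals:
    a (n,), b (m,) are feature masses, cost (n, m) is a pairwise feature distance."""
    K = torch.exp(-cost / eps)                    # Gibbs kernel
    u = torch.ones_like(a)
    v = torch.ones_like(b)
    exponent = rho / (rho + eps)                  # marginal relaxation factor
    for _ in range(n_iters):
        u = (a / (K @ v + 1e-16)) ** exponent
        v = (b / (K.t() @ u + 1e-16)) ** exponent
    return u[:, None] * K * v[None, :]            # transport plan P of shape (n, m)

# usage: P = unbalanced_sinkhorn(torch.full((32,), 1 / 32), torch.full((32,), 1 / 32),
#                                torch.cdist(torch.randn(32, 64), torch.randn(32, 64)))
```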
In this work, we present SeqFormer, a frustratingly simple model for video instance segmentation. SeqFormer follows the principle of vision transformers, modelling instance relationships across video frames. Nevertheless, we observe that a stand-alone instance query suffices for capturing the time sequence of an instance in a video, but the attention mechanism should be performed on each frame independently. To this end, SeqFormer locates an instance in each frame and aggregates temporal information to learn a powerful representation of the video-level instance, which is used to dynamically predict the mask sequence on each frame. Instance tracking is achieved naturally, without tracking branches or post-processing. On the YouTube-VIS dataset, SeqFormer achieves 47.4 AP with a ResNet-50 backbone and 49.0 AP with a ResNet-101 backbone, without bells and whistles. These results significantly exceed the previous state-of-the-art by 4.6 and 4.4 AP, respectively. Moreover, integrated with the recently proposed Swin Transformer, SeqFormer achieves a much higher AP of 59.3. We hope SeqFormer can serve as a strong baseline that fosters future research in video instance segmentation while advancing the field with a more robust, accurate, and neat model. Code and pre-trained models are publicly available at https://github.com/wjf5203/SeqFormer.
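The sketch below illustrates the core idea of one instance query attending to each frame independently and the per-frame outputs being aggregated into a video-level instance embedding; module names, sizes, and the weighted-sum aggregation are assumptions rather than the released SeqFormer code.

```python
import torch
import torch.nn as nn

class InstanceQueryAggregator(nn.Module):
    """One shared instance query attends within each frame separately;
    frame-level outputs are pooled into a video-level instance embedding."""
    def __init__(self, dim=256, heads=8):
        super().__init__()
        self.frame_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.agg_weight = nn.Linear(dim, 1)

    def forward(self, instance_query, frame_feats):
        # instance_query: (B, 1, D); frame_feats: (B, T, N, D) per-frame tokens
        B, T, N, D = frame_feats.shape
        per_frame = []
        for t in range(T):                       # attention restricted to one frame at a time
            out, _ = self.frame_attn(instance_query, frame_feats[:, t], frame_feats[:, t])
            per_frame.append(out)                # (B, 1, D)
        per_frame = torch.cat(per_frame, dim=1)  # (B, T, D)
        w = torch.softmax(self.agg_weight(per_frame), dim=1)
        return (w * per_frame).sum(dim=1)        # video-level instance embedding (B, D)

# usage: emb = InstanceQueryAggregator()(torch.randn(2, 1, 256), torch.randn(2, 5, 100, 256))
```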
Generative commonsense reasoning, which requires machines to generate sentences describing an everyday scenario given several concepts, has recently attracted much attention. However, existing models cannot perform as well as humans, since the sentences they produce are often implausible and grammatically incorrect. In this paper, inspired by the process by which humans create sentences, we propose a novel knowledge-enhanced commonsense generation framework, termed KGR^4, consisting of four stages: Retrieval, Retrospect, Refine, and Rethink. Under this framework, we first perform retrieval to search for relevant sentences from an external corpus as prototypes. We then train a generator that edits or copies these prototypes to produce candidate sentences, whose potential errors are fixed by an autoencoder-based refiner. Finally, we select the output sentence from the candidate sentences produced by generators with different hyper-parameters. Experimental results and in-depth analysis on the CommonGen benchmark strongly demonstrate the effectiveness of our framework. In particular, KGR^4 obtains 33.56 SPICE points on the official leaderboard, outperforming the previously reported best result by 2.49 SPICE points and achieving state-of-the-art performance.
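As a high-level sketch of the four-stage pipeline only, the pseudocode below strings together hypothetical `retriever`, `generator`, `refiner`, and `scorer` callables; none of these names come from the released KGR^4 components.

```python
def kgr4_style_generate(concepts, retriever, generator, refiner, scorer, hyperparam_grid):
    """Retrieval -> Retrospect -> Refine -> Rethink, expressed as a simple pipeline."""
    prototypes = retriever(concepts)                      # Retrieval: related sentences from an external corpus
    candidates = []
    for hp in hyperparam_grid:                            # Retrospect: edit/copy prototypes into candidates
        candidates += generator(concepts, prototypes, **hp)
    candidates = [refiner(c) for c in candidates]         # Refine: fix errors in candidate sentences
    return max(candidates, key=scorer)                    # Rethink: select the best-scoring output sentence
```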
Artificial intelligence (AI) provides a promising alternative for streamlining COVID-19 diagnosis. However, concerns surrounding security and trustworthiness impede the collection of large-scale representative medical data, posing a considerable challenge for training well-generalised models in clinical practice. To address this, we launched the Unified CT-COVID AI Diagnostic Initiative (UCADI), in which the AI model can be distributedly trained and independently executed at each host institution under a federated learning (FL) framework without data sharing. Here we show that our FL model outperforms the local models by a large yield (test sensitivity/specificity in China: 0.973/0.951, in the UK: 0.730/0.942), achieving performance comparable to a panel of professional radiologists. We further evaluated the model on held-out data (collected from another two hospitals left out of the FL) and heterogeneous data (acquired with contrast materials), provided visual explanations for the decisions made by the model, and analysed the trade-off between model performance and communication cost during the federated training process. Our study is based on 9,573 chest computed tomography scans (CTs) from 3,336 patients collected from 23 hospitals located in China and the UK. Collectively, our work advances the prospects of utilising federated learning for privacy-preserving AI in digital health.
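To make the federated mechanism concrete, here is a minimal FedAvg-style aggregation sketch: client models train locally and only their weights, averaged by local dataset size, are shared. This illustrates the general FL idea, not the UCADI codebase.

```python
import copy
import torch

def federated_average(global_model, client_models, client_sizes):
    """Average client model weights, weighted by local dataset size,
    without moving any raw patient data between institutions."""
    total = float(sum(client_sizes))
    new_state = copy.deepcopy(global_model.state_dict())
    for key, value in new_state.items():
        if value.dtype.is_floating_point:          # average learnable weights only
            new_state[key] = sum(
                m.state_dict()[key] * (n / total)
                for m, n in zip(client_models, client_sizes)
            )
    return new_state  # load with global_model.load_state_dict(new_state)
```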
Leveraging the advances of natural language processing, most recent scene text recognizers adopt an encoder-decoder architecture where text images are first converted to representative features and then to a sequence of characters via `sequential decoding'. However, scene text images suffer from rich noise from different sources, such as complex backgrounds and geometric distortions, which often confuses the decoder and leads to incorrect alignment of visual features at noisy decoding time steps. This paper presents I2C2W, a novel scene text recognition technique that is tolerant to geometric and photometric degradation by decomposing scene text recognition into two inter-connected tasks. The first task focuses on image-to-character (I2C) mapping, which detects a set of character candidates from images based on different alignments of visual features in a non-sequential way. The second task tackles character-to-word (C2W) mapping, which recognizes scene text by decoding words from the detected character candidates. Learning directly from character semantics (instead of noisy image features) corrects falsely detected character candidates effectively, which greatly improves the final text recognition accuracy. Extensive experiments over nine public datasets show that the proposed I2C2W outperforms the state-of-the-art by large margins on challenging scene text datasets with various curvature and perspective distortions. It also achieves very competitive recognition performance on multiple normal scene text datasets.
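Purely as a conceptual sketch of the two-stage decomposition (the callables and candidate format are hypothetical, not the paper's models), the snippet below separates candidate character detection from word decoding, together with a toy C2W stand-in.

```python
def i2c2w_style_recognize(image, char_detector, word_decoder):
    """I2C proposes character candidates with positions; C2W decodes the word."""
    # I2C: candidate characters, e.g. [{"char": "s", "pos": 0.1, "conf": 0.9}, ...]
    candidates = char_detector(image)
    # C2W: reason over character semantics to drop false positives and order the rest
    return word_decoder(candidates)

def naive_c2w(candidates, conf_thresh=0.5):
    """Toy stand-in for C2W: keep confident candidates and order them left-to-right."""
    kept = [c for c in candidates if c["conf"] >= conf_thresh]
    return "".join(c["char"] for c in sorted(kept, key=lambda c: c["pos"]))
```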
In most cases, the training of Generative Adversarial Networks (GANs) applies uniform or Gaussian sampling in the latent space, which likely spends most of the computation on examples that can already be handled properly and are easy to generate. Theoretically, importance sampling speeds up stochastic optimization in supervised learning by prioritizing training examples. In this paper, we explore the possibility of adapting importance sampling to adversarial learning. We use importance sampling to replace the uniform and Gaussian sampling methods in the latent space, and employ a normalizing flow to approximate the latent-space posterior distribution by density estimation. Empirically, results on MNIST and Fashion-MNIST demonstrate that our method significantly accelerates GAN optimization while retaining visual fidelity in the generated samples.
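A minimal sketch of the sampling idea, assuming a flow whose `sample` method returns samples together with their log-density (a common convention in flow libraries, but an assumption here): draw latents from the flow q(z) instead of the standard-normal prior p(z) and reweight by p(z)/q(z).

```python
import torch

def flow_importance_sample(flow, n_samples):
    """Sample latents from a normalizing flow q(z) and compute self-normalised
    importance weights against the standard-normal prior p(z)."""
    z, log_qz = flow.sample(n_samples)                    # z ~ q(z), with log q(z)
    log_pz = torch.distributions.Normal(0.0, 1.0).log_prob(z).sum(dim=-1)
    weights = torch.exp(log_pz - log_qz)                  # importance weights p(z) / q(z)
    weights = weights / weights.sum()                     # self-normalisation
    return z, weights

# a weighted generator objective could then be: (weights * per_sample_loss(G(z))).sum()
```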